human memory
EcphoryRAG: Re-Imagining Knowledge-Graph RAG via Human Associative Memory
Cognitive neuroscience research indicates that humans leverage cues to activate entity-centered memory traces (engrams) for complex, multi-hop recollection. Inspired by this mechanism, we introduce EcphoryRAG, an entity-centric knowledge graph RAG framework. During indexing, EcphoryRAG extracts and stores only core entities with corresponding metadata, a lightweight approach that reduces token consumption by up to 94% compared to other structured RAG systems. For retrieval, the system first extracts cue entities from queries, then performs a scalable multi-hop associative search across the knowledge graph. Crucially, EcphoryRAG dynamically infers implicit relations between entities to populate context, enabling deep reasoning without exhaustive pre-enumeration of relationships. Extensive evaluations on the 2WikiMultiHop, HotpotQA, and MuSiQue benchmarks demonstrate that EcphoryRAG sets a new state-of-the-art, improving the average Exact Match (EM) score from 0.392 to 0.474 over strong KG-RAG methods like HippoRAG. These results validate the efficacy of the entity-cue-multi-hop retrieval paradigm for complex question answering.
- Asia > China > Guangdong Province > Shenzhen (0.04)
- Europe > Spain > Catalonia > Barcelona Province > Barcelona (0.04)
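The abstract's indexing and retrieval stages can be sketched as a minimal entity graph with breadth-first associative expansion. This is an illustrative reconstruction, not the paper's code: the class, method names, and metadata format are assumptions.

```python
from collections import defaultdict, deque

# Hypothetical sketch of entity-cue multi-hop retrieval; names are
# illustrative, not taken from the EcphoryRAG paper.
class EntityGraph:
    def __init__(self):
        self.neighbors = defaultdict(set)   # entity -> associated entities
        self.metadata = {}                  # entity -> lightweight metadata

    def index(self, entity, linked_entities, metadata):
        """Indexing: store only core entities plus their metadata."""
        self.metadata[entity] = metadata
        for other in linked_entities:
            self.neighbors[entity].add(other)
            self.neighbors[other].add(entity)

    def multi_hop_search(self, cue_entities, max_hops=2):
        """Retrieval: BFS associative expansion from the query's cue entities."""
        frontier = deque((e, 0) for e in cue_entities)
        visited = set(cue_entities)
        while frontier:
            entity, hops = frontier.popleft()
            if hops >= max_hops:
                continue
            for nxt in self.neighbors[entity]:
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, hops + 1))
        return visited
```

For example, indexing "Marie Curie" linked to "radium", and "radium" linked to "polonium", lets a query cueing only "Marie Curie" reach "polonium" in two hops, without any pre-enumerated Curie-polonium relation.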
"Who Should I Believe?": User Interpretation and Decision-Making When a Family Healthcare Robot Contradicts Human Memory
Wang, Hong, Calvo-Barajas, Natalia, Winkle, Katie, Castellano, Ginevra
Advancements in robotic capabilities for providing physical assistance, psychological support, and daily health management are making the deployment of intelligent healthcare robots in home environments increasingly feasible in the near future. However, challenges arise when the information provided by these robots contradicts users' memory, raising concerns about user trust and decision-making. This paper presents a study that examines how varying a robot's level of transparency and sociability influences user interpretation, decision-making, and perceived trust when faced with conflicting information from a robot. In a 2 x 2 between-subjects online study, 176 participants watched videos of a Furhat robot acting as a family healthcare assistant and suggesting that a fictional user take medication at a different time from the one the user remembered. Results indicate that robot transparency influenced users' interpretation of information discrepancies: with a low-transparency robot, the most frequent assumption was that the user had not correctly remembered the time, while with the high-transparency robot, participants were more likely to attribute the discrepancy to external factors, such as a partner or another household member modifying the robot's information. Additionally, participants exhibited a tendency toward overtrust, often prioritizing the robot's recommendations over the user's memory, even when suspecting system malfunctions or third-party interference. These findings highlight the impact of transparency mechanisms in robotic systems, the complexity and importance of system access control for multi-user robots deployed in home environments, and the potential risks of users' over-reliance on robots in sensitive domains such as healthcare.
- Research Report > New Finding (1.00)
- Overview (0.93)
- Research Report > Experimental Study > Negative Result (0.68)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Consumer Health (0.88)
Sequence-to-Sequence Models with Attention Mechanistically Map to the Architecture of Human Memory Search
Salvatore, Nikolaus, Zhang, Qiong
Past work has long recognized the important role of context in guiding how humans search their memory. While context-based memory models can explain many memory phenomena, it remains unclear why humans develop such architectures over possible alternatives in the first place. In this work, we demonstrate that foundational architectures in neural machine translation -- specifically, recurrent neural network (RNN)-based sequence-to-sequence models with attention -- exhibit mechanisms that directly correspond to those specified in the Context Maintenance and Retrieval (CMR) model of human memory. Since neural machine translation models have evolved to optimize task performance, their convergence with human memory models provides a deeper understanding of the functional role of context in human memory and presents new ways to model human memory. Leveraging this convergence, we implement a neural machine translation model as a cognitive model of human memory search that is both interpretable and capable of capturing complex dynamics of learning. We show that our model accounts for both averaged and optimal human behavioral patterns as effectively as context-based memory models. Further, we demonstrate additional strengths of the proposed model by evaluating how memory search performance emerges from the interaction of different model components.
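The claimed correspondence can be illustrated with a toy sketch, under the standard textbook forms of the two mechanisms (this is not the paper's code): CMR maintains a slowly drifting context vector, and retrieval strength over stored items behaves like softmax dot-product attention in a seq2seq decoder.

```python
import numpy as np

# Toy illustration of the CMR/attention correspondence; parameter names
# follow common CMR conventions, the pairing itself is the sketch's point.
def drift_context(context, item, beta=0.5):
    """CMR-style update: new context mixes old context with the current item."""
    rho = np.sqrt(1.0 - beta ** 2)   # keeps a unit-length context vector
    new = rho * context + beta * item
    return new / np.linalg.norm(new)

def attention_retrieval(context, memory):
    """Softmax dot-product attention over stored item vectors (rows of memory),
    analogous to cueing memory with the current context."""
    scores = memory @ context
    weights = np.exp(scores - scores.max())
    return weights / weights.sum()
```

With orthonormal item vectors, the attention weights concentrate on items most similar to the current context, mirroring CMR's context-cued recall.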
Memory-Maze: Scenario Driven Benchmark and Visual Language Navigation Model for Guiding Blind People
Kuribayashi, Masaki, Uehara, Kohei, Wang, Allan, Sato, Daisuke, Chu, Simon, Morishima, Shigeo
Visual Language Navigation (VLN) powered navigation robots have the potential to guide blind people by understanding and executing route instructions provided by sighted passersby. This capability allows robots to operate in environments that are often unknown a priori. Existing VLN models are insufficient for the scenario of navigation guidance for blind people, as they need to understand routes described from human memory, which frequently contain stutters, errors, and omitted details, as opposed to instructions obtained by thinking out loud, such as in the Room-to-Room dataset. However, there is currently no benchmark that simulates instructions obtained from human memory in environments where blind people navigate. To this end, we present our benchmark, Memory-Maze, which simulates the scenario of seeking route instructions for guiding blind people. Our benchmark contains a maze-like structured virtual environment and novel route instruction data from human memory. To collect natural language instructions, we conducted two studies: one with sighted passersby onsite and one with annotators online. Our analysis demonstrates that the instruction data collected onsite was lengthier and contained more varied wording. Alongside our benchmark, we propose a VLN model better equipped to handle the scenario. Our proposed VLN model uses Large Language Models (LLMs) to parse instructions and generate Python code for robot control. We further show that the existing state-of-the-art model performed suboptimally on our benchmark, while our proposed method outperformed it by a fair margin. We found that future research should exercise caution when considering VLN technology for practical applications, as real-world scenarios have different characteristics than ones collected in traditional settings.
- North America > United States > New York (0.04)
- Asia > Japan (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.68)
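The instruction-to-code step described above can be sketched with a toy parser. This stand-in uses regular expressions instead of an LLM, and the two-command robot API (`move_forward`, `turn`) is a hypothetical placeholder, not the paper's interface.

```python
import re

# Toy stand-in for the LLM step that parses a spoken route instruction
# into robot-control code; the robot API here is hypothetical.
def parse_instruction(text):
    """Turn 'go straight, then turn left...' into a list of API calls."""
    commands = []
    for phrase in re.split(r"[,.]|\bthen\b", text.lower()):
        phrase = phrase.strip()
        if not phrase:
            continue
        if "straight" in phrase or "forward" in phrase:
            commands.append("robot.move_forward()")
        elif "left" in phrase:
            commands.append("robot.turn('left')")
        elif "right" in phrase:
            commands.append("robot.turn('right')")
    return commands
```

An LLM-based parser replaces the regex rules with a prompt, but the output contract is the same: a sequence of executable control calls recovered from noisy, memory-based instructions.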
Fragment Completion in Humans and Machines
Partial information can trigger a complete memory. At the same time, human memory is not perfect. A cue can contain enough information to specify an item in memory, but fail to trigger that item. In the context of word memory, we present experiments that demonstrate some basic patterns in human memory errors. We use cues that consist of word fragments.
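A word-fragment cue of the kind described above can be modeled as a pattern with fixed letters and blanks. The sketch below (the `_` blank convention and lexicon are illustrative assumptions) also shows how a single cue can match several words, i.e., fail to uniquely specify an item.

```python
# Minimal sketch of fragment-cue matching: a cue like "me_o_y" fixes the
# letters it shows and leaves blanks ('_') free. Cue format is illustrative.
def matches(fragment, word):
    """True if the word fits the fragment: same length, shown letters agree."""
    return len(word) == len(fragment) and all(
        f == "_" or f == w for f, w in zip(fragment, word)
    )

def complete(fragment, lexicon):
    """All lexicon words consistent with the fragment cue."""
    return [w for w in lexicon if matches(fragment, w)]
```

For instance, the cue "me_o_y" is consistent with both "memory" and "melody", so a correct retrieval system can still return a competitor, one simple route to the cue-specifies-but-fails-to-trigger errors the experiments probe.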
Using large language models to study human memory for meaningful narratives
Georgiou, Antonios, Can, Tankut, Katkov, Mikhail, Tsodyks, Misha
One of the most impressive achievements of the AI revolution is the development of large language models that can generate meaningful text and respond to instructions in plain English with no additional training necessary. Here we show that language models can be used as a scientific instrument for studying human memory for meaningful material. We developed a pipeline for designing large-scale memory experiments and analyzing the obtained results. We performed online memory experiments with a large number of participants and collected recognition and recall data for narratives of different lengths. We found that both recall and recognition performance scale linearly with narrative length. Furthermore, in order to investigate the role of narrative comprehension in memory, we repeated these experiments using scrambled versions of the presented stories. We found that even though recall performance declined significantly, recognition remained largely unaffected. Interestingly, recalls in this condition seem to follow the original narrative order rather than the scrambled presentation, pointing to a contextual reconstruction of the story in memory.
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > Middle East > Israel (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.30)
Aspects of human memory and Large Language Models
Large Language Models (LLMs) are huge artificial neural networks which primarily serve to generate text, but also provide a very sophisticated probabilistic model of language use. Since generating a semantically consistent text requires a form of effective memory, we investigate the memory properties of LLMs and find surprising similarities with key characteristics of human memory. We argue that the human-like memory properties of the Large Language Model do not follow automatically from the LLM architecture but are rather learned from the statistics of the training textual data. These results strongly suggest that the biological features of human memory leave an imprint on the way that we structure our textual narratives.
[Figure 1: Recall accuracy for a serial memory experiment with human subjects (sample data from [4]) and for a memorization experiment on a list of 20 facts of the has-a type for the Large Language Model GPT-J [5].]
- Europe > Spain > Galicia > Madrid (0.05)
- South America > Brazil (0.04)
- North America > United States > New York (0.04)
Memory as a Mass-based Graph: Towards a Conceptual Framework for the Simulation Model of Human Memory in AI
Mollakazemiha, Mahdi, Fatzade, Hassan
There are two approaches to simulating memory, as well as learning, in artificial intelligence: the functionalistic approach and the cognitive approach. A necessary condition for pursuing the second approach is a model of brain activity that agrees well with observational facts such as mistakes and forgotten experiences. Human memory has a solid core that includes the components of our identity, our family and our hometown, the major and determinative events of our lives, and the countless repeated and accepted facts of our culture; the further we move toward the periphery, the flimsier the data becomes and the more easily it is exposed to oblivion. A model in which these topographical differences are clearly distinguishable is therefore essential. In our proposed model, we translate this topography into quantities attributed to the nodes. The result is an edge-weighted graph with mass-based values on the nodes, where each mass demonstrates the importance of an atomic proposition, as a truth, for an intelligent being. Furthermore, the graph develops and modifies itself dynamically: in successive phases, it changes the masses of the nodes and the weights of the edges depending on inputs gathered from the environment.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > New York (0.04)
- Asia > Middle East > Iran > Tehran Province > Tehran (0.04)
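The mass-based graph described above can be sketched as a small data structure. This is an assumption-laden toy, not the authors' model: the reinforcement and decay rules, thresholds, and method names are all illustrative.

```python
# Illustrative sketch of a mass-based memory graph: nodes are atomic
# propositions carrying a mass (importance), edges carry association
# weights, and low-mass peripheral nodes fade toward oblivion over phases.
# The specific update rules here are assumptions, not the paper's.
class MassGraph:
    def __init__(self):
        self.mass = {}       # proposition -> mass (importance as a truth)
        self.weight = {}     # (a, b) -> association strength

    def add(self, prop, mass=1.0):
        """Reinforce a proposition: repeated input accumulates mass."""
        self.mass[prop] = self.mass.get(prop, 0.0) + mass

    def associate(self, a, b, w=1.0):
        """Strengthen the edge between two propositions."""
        key = tuple(sorted((a, b)))
        self.weight[key] = self.weight.get(key, 0.0) + w

    def decay(self, rate=0.1):
        """One phase of forgetting: every mass shrinks multiplicatively."""
        self.mass = {p: m * (1.0 - rate) for p, m in self.mass.items()}

    def forgotten(self, threshold=0.5):
        """Peripheral nodes whose mass fell below the oblivion threshold."""
        return {p for p, m in self.mass.items() if m < threshold}
```

Under uniform decay, a heavily reinforced core proposition (say, one's hometown) survives many phases, while a lightly reinforced peripheral fact drops below threshold quickly, reproducing the core-versus-periphery topography the abstract describes.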
Memory in humans and deep language models: Linking hypotheses for model augmentation
Raccah, Omri, Chen, Phoebe, Willke, Ted L., Poeppel, David, Vo, Vy A.
The computational complexity of the self-attention mechanism in Transformer models significantly limits their ability to generalize over long temporal durations. Memory-augmentation, or the explicit storing of past information in external memory for subsequent predictions, has become a constructive avenue for mitigating this limitation. We argue that memory-augmented Transformers can benefit substantially from considering insights from the memory literature in humans. We detail an approach for integrating evidence from the human memory system through the specification of cross-domain linking hypotheses. We then provide an empirical demonstration to evaluate the use of surprisal as a linking hypothesis, and further identify the limitations of this approach to inform future research.
- Europe > Austria > Vienna (0.14)
- North America > United States > New York (0.05)
- North America > United States > Washington > King County > Seattle (0.04)
- Research Report > New Finding (0.70)
- Research Report > Experimental Study (0.49)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.69)
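The surprisal linking hypothesis mentioned above can be made concrete with a short sketch. The threshold-based selection policy is an assumption for illustration, not the paper's procedure; only the surprisal definition itself is standard.

```python
import math

# Sketch of surprisal as a linking hypothesis: tokens the model found
# surprising (high -log p) are flagged for storage in external memory.
# The threshold policy is an illustrative assumption.
def surprisal(prob):
    """Shannon surprisal in bits: -log2 of the predicted probability."""
    return -math.log2(prob)

def select_for_memory(token_probs, threshold_bits=4.0):
    """Keep (token, probability) pairs whose surprisal exceeds the threshold."""
    return [tok for tok, p in token_probs if surprisal(p) > threshold_bits]
```

A highly predictable token (p = 0.5, surprisal 1 bit) is discarded, while a rare one (p = 0.01, about 6.6 bits) is stored, matching the intuition that surprising events are the ones worth committing to external memory.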
The human memory--facts and information
From the moment we are born, our brains are bombarded by an immense amount of information about ourselves and the world around us. So, how do we hold on to everything we've learned and experienced? Humans retain different types of memories for different lengths of time. We also have a working memory, which lets us keep something in our minds for a limited time by repeating it. Whenever you say a phone number to yourself over and over to remember it, you're using your working memory.
- North America > United States > California (0.05)
- Europe > Portugal > Lisbon > Lisbon (0.05)